
    Indoor navigation for the visually impaired: enhancements through utilisation of the Internet of Things and deep learning

    Wayfinding and navigation are essential aspects of independent living that rely heavily on the sense of vision. Walking through a complex building requires knowing one's exact location, finding a suitable path to the desired destination, avoiding obstacles, and monitoring orientation and movement along the route. People who cannot access sight-dependent information, such as that provided by signage, maps and environmental cues, can find it difficult to accomplish these tasks independently. They can rely on assistance from others, or maintain their independence by using assistive technologies and the resources provided by smart environments. Several solutions have applied technological innovations to indoor navigation over the last few years; however, there is still no complete solution that meets the navigation requirements of visually impaired (VI) people, and no single technology can resolve all the navigation difficulties they face. A hybrid solution that uses Internet of Things (IoT) devices and deep learning techniques to discern the patterns of an indoor environment may help VI people gain the confidence to travel independently. This thesis aims to improve the independence of VI people and enhance their journeys in indoor settings through a proposed smartphone-based framework. The thesis proposes a novel framework, Indoor-Nav, to provide a VI-friendly, obstacle-avoiding path and to predict the user's position. Its components are Ortho-PATH, Blue Dot for VI People (BVIP), and a deep learning-based indoor positioning model. The work establishes a novel collision-free pathfinding algorithm, Ortho-PATH, which generates a VI-friendly path by sensing a grid-based indoor space. Further, to ensure correct movement, BVIP uses beacons and a smartphone to monitor the movements and relative position of the moving user. For dark areas without external devices, the research tests the feasibility of using sensory information from a smartphone with a pre-trained regression-based deep learning model to predict the user's absolute position. A diverse range of simulations and experiments confirms the performance and effectiveness of the proposed framework and its components. The results show that Indoor-Nav contains the first pathfinding algorithm designed around the needs of VI people: it generates a path that runs alongside walls while avoiding obstacles, and the research benchmarks this approach against other popular pathfinding algorithms. Further, the research develops a smartphone-based application to test the trajectories of a moving user in an indoor environment.
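
    The abstract describes Ortho-PATH only at a high level, so the following Python sketch is purely illustrative of the idea as stated: an A*-style search over an occupancy grid, restricted to orthogonal moves, whose step costs favour cells that run alongside walls. Every name and constant here (vi_friendly_path, wall_bonus, the 0.5 discount) is a hypothetical stand-in, not the thesis's implementation.

        # Illustrative sketch only, not the thesis's algorithm. Grid cells:
        # 0 = free, 1 = wall/obstacle. Moves are orthogonal, and stepping into
        # a wall-adjacent cell is discounted so routes hug walls that a VI
        # traveller can trail with a cane or hand.
        import heapq

        def in_bounds(grid, r, c):
            return 0 <= r < len(grid) and 0 <= c < len(grid[0])

        def step_cost(grid, cell, wall_bonus=0.5):
            # Hypothetical weighting: wall-adjacent cells are cheaper to enter.
            r, c = cell
            beside_wall = any(in_bounds(grid, r + dr, c + dc) and grid[r + dr][c + dc] == 1
                              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
            return 1.0 - (wall_bonus if beside_wall else 0.0)

        def vi_friendly_path(grid, start, goal):
            manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
            frontier = [(0.0, start)]
            g, parent = {start: 0.0}, {start: None}
            while frontier:
                _, cur = heapq.heappop(frontier)
                if cur == goal:
                    path = []
                    while cur is not None:
                        path.append(cur)
                        cur = parent[cur]
                    return path[::-1]
                r, c = cur
                for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if not (in_bounds(grid, *nxt) and grid[nxt[0]][nxt[1]] == 0):
                        continue
                    cost = g[cur] + step_cost(grid, nxt)
                    if cost < g.get(nxt, float("inf")):
                        g[nxt], parent[nxt] = cost, cur
                        # Heuristic scaled by the cheapest possible step (0.5)
                        # so it stays admissible under the discounted costs.
                        heapq.heappush(frontier, (cost + 0.5 * manhattan(nxt, goal), nxt))
            return None  # no collision-free route exists

        grid = [[0, 0, 0, 0, 0],
                [0, 1, 1, 1, 0],
                [0, 0, 0, 1, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 0, 0, 0]]
        print(vi_friendly_path(grid, (0, 0), (4, 4)))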

    Deep learning-based positioning of visually impaired people in indoor environments

    Wayfinding and navigation can present substantial challenges to visually impaired (VI) people. Some of the most significant of these challenges arise from the difficulty of knowing the location of a moving person with enough accuracy, and positioning and localization in indoor environments require unique solutions. Positioning is one of the critical aspects of any navigation system that can assist a VI person with independent movement; the other essential features of a typical indoor navigation system include pathfinding, obstacle avoidance, and capabilities for user interaction. This work focuses on positioning a VI person with enough precision for use in indoor navigation. We aim to achieve this by utilizing only the capabilities of a typical smartphone. More specifically, our proposed approach is based on the smartphone's accelerometer, gyroscope, and magnetometer. We consider the indoor environment to be divided into microcells, with the vertex of each microcell assigned two-dimensional local coordinates. A regression-based analysis is used to train a multilayer perceptron neural network to map the inertial sensor measurements to the coordinates of the microcell vertex corresponding to the position of the smartphone. To test our proposed solution, we used IPIN2016, a publicly available multivariate dataset that divides the indoor environment into cells tagged with the inertial sensor data of a smartphone, to generate the training and validation sets. Our experiments show that our proposed approach can achieve a remarkable prediction accuracy of more than 94%, with a 0.65 m positioning error.
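
    As a minimal sketch of the regression idea above (not the authors' network), the snippet below trains a multilayer perceptron to map nine inertial features (3-axis accelerometer, gyroscope and magnetometer readings) to 2-D microcell-vertex coordinates. The layer sizes and the synthetic data are assumptions; the paper trains on the IPIN2016 dataset instead.

        # Stand-in data: 9 inertial features -> 2-D vertex coordinates.
        # Hypothetical layer sizes; the real model and dataset are the paper's.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 9))                   # fake sensor readings
        W = rng.normal(size=(9, 2))
        y = X @ W + 0.05 * rng.normal(size=(2000, 2))    # fake (x, y) coordinates

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        scaler = StandardScaler().fit(X_tr)

        mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
        mlp.fit(scaler.transform(X_tr), y_tr)

        pred = mlp.predict(scaler.transform(X_te))
        rmse = np.sqrt(np.mean(np.sum((pred - y_te) ** 2, axis=1)))
        print(f"positioning RMSE: {rmse:.3f} (same units as the cell coordinates)")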

    Comparison of pathfinding algorithms for visually impaired people in IoT-based smart buildings

    Indoor navigation is highly challenging for visually impaired people, particularly when they visit an unknown environment with a complex design. In addition, a person at the entrance of a building might not be aware of distant changes or disruptions on the path to the destination. Internet of Things devices can become the foundation infrastructure for scanning the dynamic changes in such an environment. With the sensory data of the scanned nodes, a dynamic pathfinding algorithm can provide a guided route to the destination that accounts for those changes. Various pathfinding algorithms have been proposed for indoor environments, including A*, Dijkstra's, probabilistic roadmaps, recursive trees and orthogonal jump point search. However, no study has examined whether these algorithms suit the special requirements of low-vision people. We carried out simulations in MATLAB to evaluate the performance of these algorithms on parameters such as distance and nodes travelled, execution time, and safety. The results strongly support implementing orthogonal jump point search, the most suitable of these algorithms, to achieve an optimal and safe path for low-vision people in complex buildings.
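
    The paper's MATLAB harness is not public, so the following is a hedged Python sketch of the comparison methodology as described: run Dijkstra's algorithm and A* over the same occupancy grid and record path length, nodes expanded and execution time. The grid layout and all names are illustrative assumptions.

        # Benchmark sketch: one planner core, with Dijkstra expressed as A*
        # whose heuristic is zero. Metrics mirror those named in the abstract.
        import heapq
        import time

        def search(grid, start, goal, heuristic):
            frontier = [(0, start)]
            g, parent, done = {start: 0}, {start: None}, set()
            expanded = 0
            while frontier:
                _, cur = heapq.heappop(frontier)
                if cur in done:
                    continue                      # stale queue entry
                done.add(cur)
                expanded += 1
                if cur == goal:
                    path = []
                    while cur is not None:
                        path.append(cur)
                        cur = parent[cur]
                    return path[::-1], expanded
                r, c = cur
                for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                            and grid[nxt[0]][nxt[1]] == 0):
                        cost = g[cur] + 1
                        if cost < g.get(nxt, float("inf")):
                            g[nxt], parent[nxt] = cost, cur
                            heapq.heappush(frontier, (cost + heuristic(nxt, goal), nxt))
            return None, expanded

        manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
        # Full-width walls every fourth row with a gap at the last column,
        # to give the planners some work.
        grid = [[1] * 19 + [0] if r % 4 == 0 and r > 0 else [0] * 20
                for r in range(20)]

        for name, h in (("Dijkstra", lambda a, b: 0), ("A*", manhattan)):
            t0 = time.perf_counter()
            path, expanded = search(grid, (0, 0), (19, 19), h)
            ms = 1000 * (time.perf_counter() - t0)
            print(f"{name:8} path={len(path) - 1:3} steps, expanded={expanded:4}, {ms:.2f} ms")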

    DynaPATH: dynamic learning-based indoor navigation for VIP in IoT-based environments

    Traditionally, pathfinding is solved using classical search algorithms such as Dijkstra's, A*, probabilistic roadmaps and jump point search. These algorithms remain practical in a familiar environment that undergoes minimal change. However, the paths they generate may lose their validity because the algorithms cannot handle dynamic changes in a complex environment, restricting the independent living of visually impaired people (VIPs). Nowadays, the Internet of Things, a network of smart physical devices, has become a foundation infrastructure for indoor navigation and pathfinding. Although variations in the environment are identified and stored by sensors, there is no reasonable system that adapts to the variable circumstances and learns to react to the changes. In this paper, we introduce DynaPATH, a learning-based autonomous system that classifies events in dynamic environments and adapts to the changes. We performed simulations to evaluate the effectiveness of our approach and compared the results with the performance of different pathfinding algorithms for VIPs. The simulation results strongly indicate that our proposed system is highly stable and VIP-friendly for navigation in a complex environment.
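
    The abstract does not specify DynaPATH's learning component, so the sketch below shows only the surrounding control loop one might build around it: simulated IoT sensor events update an occupancy grid, and the route is re-planned whenever an event invalidates it. The event stream and the plain breadth-first plan() helper are hypothetical stand-ins, not the authors' classifier.

        # Event-driven re-planning loop (illustrative only).
        from collections import deque

        def plan(grid, start, goal):
            # Plain BFS as a stand-in for whatever planner DynaPATH wraps.
            queue, parent = deque([start]), {start: None}
            while queue:
                cur = queue.popleft()
                if cur == goal:
                    path = []
                    while cur is not None:
                        path.append(cur)
                        cur = parent[cur]
                    return path[::-1]
                r, c = cur
                for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                            and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                        parent[nxt] = cur
                        queue.append(nxt)
            return None

        grid = [[0] * 5 for _ in range(5)]
        start, goal = (0, 0), (4, 4)
        path = plan(grid, start, goal)

        # Simulated IoT reports: (cell, new state), where 1 = newly blocked.
        for cell, state in [((2, 0), 1), ((0, 3), 1)]:
            grid[cell[0]][cell[1]] = state
            if path and cell in path:
                path = plan(grid, start, goal)    # event invalidates the route
                print(f"blocked {cell}: on route, re-planned ({len(path) - 1} steps)")
            else:
                print(f"blocked {cell}: off route, no re-plan needed")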

    An improved positioning method in a smart building for visually impaired users

    Pedestrians have a variety of tools that can assist them in travelling, including maps, kiosks and signage; however, these facilities are inaccessible to visually impaired users. Moreover, voice-aided features based on the Global Positioning System (GPS) cannot be adopted in indoor applications due to signal attenuation and multipath effects. The Internet of Things (IoT) has become a backbone for navigation applications that can locate a user within IoT-equipped smart buildings. Despite the growing use of Wi-Fi and beacon technologies, smartphones are uniquely positioned to be a critical part of a localization solution, and their popularity is reinforced by their diverse array of microelectromechanical systems (MEMS) inertial sensors. This paper discusses an adaptive distance-estimation algorithm for visually impaired people. It demonstrates the use of a smartphone to supplement an indoor navigation system in dark areas, where external proximity sensors fail to share location information. An improved fusion algorithm is presented that adapts to the walking style of the user while detecting right turns and headings. The proposed fusion algorithm relies on inertial sensors to track the relative position of the moving user, with the absolute initial position obtained from an iBeacon. Tests were carried out to determine the accuracy of the steps travelled, orientation and heading information for a user holding a smartphone. Our approach, which estimates heading and orientation from the 3-axis inertial sensors (gyroscope, accelerometer and magnetometer), shows an accuracy of more than 95%. The positioning root-mean-square error (RMSE) results demonstrate that the hybrid fusion algorithm can achieve real-time positioning and reduce indoor positioning error.
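
    The fusion algorithm itself is not reproduced in the abstract, so the following is a minimal sketch of the standard dead-reckoning pipeline it builds on: detect steps from accelerometer-magnitude peaks, take the heading at each step from a fused orientation estimate, and advance the position from an absolute iBeacon fix. The step length, threshold and toy data are assumptions, not the paper's calibrated values.

        # Pedestrian dead reckoning, sketched under stated assumptions.
        import math

        STEP_LENGTH_M = 0.7     # assumed average stride; a real system adapts this
        PEAK_THRESHOLD = 11.0   # m/s^2; total acceleration above this marks a step

        def detect_steps(accel_magnitudes):
            """Indices where |a| crosses the threshold upward (crude peak detector)."""
            return [i for i in range(1, len(accel_magnitudes))
                    if accel_magnitudes[i] >= PEAK_THRESHOLD > accel_magnitudes[i - 1]]

        def dead_reckon(initial_fix, accel_magnitudes, headings_rad):
            """Advance from an iBeacon-provided absolute fix, one step at a time."""
            x, y = initial_fix
            track = [(x, y)]
            for i in detect_steps(accel_magnitudes):
                x += STEP_LENGTH_M * math.sin(headings_rad[i])   # east component
                y += STEP_LENGTH_M * math.cos(headings_rad[i])   # north component
                track.append((x, y))
            return track

        # Toy data: four synthetic steps heading north, then a right turn east.
        accel = [9.8, 12.0, 9.8, 12.0, 9.8, 12.0, 9.8, 12.0, 9.8]
        heads = [0.0, 0.0, 0.0, 0.0, math.pi / 2, math.pi / 2,
                 math.pi / 2, math.pi / 2, 0.0]
        for px, py in dead_reckon((3.0, 1.5), accel, heads):
            print(f"({px:.2f}, {py:.2f})")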

    Indoor positioning framework for visually impaired people using Internet of Things

    To overcome the limitations of the Global Positioning System (GPS) in indoor environments, various indoor positioning systems have been developed using Wi-Fi, Bluetooth, ultra-wideband (UWB) and radio-frequency identification (RFID). Among them, Wi-Fi technologies are the most commonly used for indoor navigation; however, Wi-Fi signals may be unavailable in some areas due to obstacles and coverage gaps, and the accuracy Wi-Fi achieves, between 5 and 15 m, is unfavorable for visually impaired people. The popularity of beacons for positioning, together with smartphones' built-in inertial sensors, plays a vital role in developing practical indoor navigation systems. This paper presents a framework for visually impaired people (VIPs) based on the inertial sensors of smartphones and Bluetooth beacons. Beacons/proximity sensors in a building can help a pedestrian navigate between two landmarks/points of interest via turn-by-turn navigation. However, there are certain areas in a building, such as a big hallway or a dark alley, where external sensing is absent. The model demonstrates that inertial sensors are useful for tracking a VIP in dark areas and also minimizes the use of external sensors between two landmarks/beacons. The performance of the proposed framework with the fusion algorithm in an Android application is examined by conducting trajectory tests on a smartphone. The experimental results of the walking traces show that the system is accurate, with a mean position error of roughly 1.5-2 m, which could be improved further by implementing magnetometer-based position-learning techniques.
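
    The abstract does not give the beacon-ranging model, so this sketch shows only the generic iBeacon proximity step such a framework typically uses: convert received signal strength (RSSI) to distance with the log-distance path-loss model and snap the user to the closest landmark. TX_POWER_DBM (the calibrated RSSI at 1 m) and the path-loss exponent are deployment-specific values, assumed here.

        # Generic iBeacon ranging sketch; constants are assumptions.
        TX_POWER_DBM = -59.0   # assumed calibrated RSSI at 1 m
        PATH_LOSS_N = 2.2      # assumed indoor path-loss exponent

        def rssi_to_distance(rssi_dbm):
            """Log-distance model: d = 10 ** ((txPower - RSSI) / (10 * n))."""
            return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_N))

        def nearest_landmark(readings, landmarks):
            """Pick the landmark whose beacon currently ranges closest."""
            beacon_id, rssi = min(readings, key=lambda r: rssi_to_distance(r[1]))
            return landmarks[beacon_id], rssi_to_distance(rssi)

        landmarks = {"b1": "entrance", "b2": "lift lobby", "b3": "room 204"}
        readings = [("b1", -78.0), ("b2", -63.0), ("b3", -85.0)]
        place, dist = nearest_landmark(readings, landmarks)
        print(f"nearest landmark: {place} (~{dist:.1f} m away)")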

    Localization techniques in indoor navigation system for visually impaired people

    Indoor navigation is an active research area that tackles the problem of locating an object or person, with applications in several domains ranging from emergency response to improving marketing strategies in micro indoor spaces. This paper reviews the emerging indoor technologies explored to resolve indoor navigation for visually impaired people. It discusses various positioning-enabled wireless technologies and algorithms used in real-world scenarios, with a comprehensive study of their advantages and disadvantages.